
Conversation

keehyuna
Collaborator

@keehyuna keehyuna commented Aug 1, 2025

Description

Support pre-quantized HF models and a post-training quantization (PTQ) option in run_llm.py

Fixes # (issue)
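
A minimal sketch of how the two paths might be distinguished (illustrative only; the helper name and logic are assumptions, not the actual run_llm.py code): a pre-quantized HF checkpoint already carries quantization metadata in its config, while a full-precision model would go through the PTQ path.

```python
from transformers import AutoConfig

def is_pre_quantized(model_name: str) -> bool:
    # Hypothetical helper: pre-quantized HF checkpoints typically carry a
    # quantization_config entry in config.json; if it is absent, the model
    # is full precision and PTQ would be applied instead.
    cfg = AutoConfig.from_pretrained(model_name)
    return getattr(cfg, "quantization_config", None) is not None
```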

Type of change

  • New feature (non-breaking change which adds functionality)

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes
  • I have added the relevant labels to my PR so that relevant reviewers are notified

@meta-cla meta-cla bot added the cla signed label Aug 1, 2025
@keehyuna keehyuna self-assigned this Aug 6, 2025
@keehyuna keehyuna changed the title fp8 pre-quantized model support Pre-quantized model support Aug 7, 2025
@keehyuna keehyuna changed the title Pre-quantized model support Feat: Pre-quantized LLM model support Aug 7, 2025
@keehyuna keehyuna marked this pull request as ready for review August 7, 2025 12:39
@keehyuna keehyuna requested review from narendasan and peri044 and removed request for narendasan August 8, 2025 06:44
return model


class TensorRTQuantizedLinear(torch.nn.Module):
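
For context, a rough sketch of what a wrapper like TensorRTQuantizedLinear might do (an assumption for illustration, not the PR's actual implementation): hold the pre-quantized weight and its scale from the checkpoint and dequantize before calling F.linear, leaving room for a lowering pass to swap in quantize/dequantize ops that TensorRT can fuse into FP8 GEMMs.

```python
from typing import Optional

import torch
import torch.nn.functional as F

class QuantizedLinearSketch(torch.nn.Module):
    """Illustrative stand-in for a quantized linear wrapper: stores an FP8
    weight and its scale from a pre-quantized checkpoint and dequantizes in
    forward."""

    def __init__(self, weight_fp8: torch.Tensor, weight_scale: torch.Tensor,
                 bias: Optional[torch.Tensor] = None):
        super().__init__()
        self.register_buffer("weight_fp8", weight_fp8)      # e.g. torch.float8_e4m3fn
        self.register_buffer("weight_scale", weight_scale)  # per-tensor or per-channel
        self.bias = bias

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Dequantize the stored weight; a real implementation would instead
        # emit quant/dequant ops that the TensorRT backend recognizes.
        w = self.weight_fp8.to(x.dtype) * self.weight_scale
        return F.linear(x, w, self.bias)
```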
Collaborator


@peri044 Is this something we might want to upstream to ModelOpt in the future?

Collaborator


Or pull into main torch-tensorrt as a pass?

Collaborator


I guess it's somewhat HF-specific, so remaining in this tool would make sense, but are there some parts we could make generic for any sort of quantization workflow (e.g. torchao)?

Collaborator Author


Thanks. I think quantize_model() could be moved into a function like torch_tensorrt.dynamo.quantize(). I'm currently investigating how to separate the calibration data path from the quantization logic.
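
One possible shape for that separation (a sketch under the assumption that the standard ModelOpt mtq.quantize() API is used; torch_tensorrt.dynamo.quantize() does not exist yet): keep calibration as a caller-supplied forward_loop so the quantization entry point stays independent of the calibration data pipeline.

```python
import modelopt.torch.quantization as mtq

def quantize_sketch(model, fmt: str, forward_loop=None):
    # Hypothetical torch_tensorrt.dynamo.quantize()-style entry point: it only
    # selects the ModelOpt config and runs PTQ; the caller owns the data.
    cfg = {"fp8": mtq.FP8_DEFAULT_CFG, "nvfp4": mtq.NVFP4_DEFAULT_CFG}[fmt]
    return mtq.quantize(model, cfg, forward_loop=forward_loop)

# Caller side: calibration stays next to the data pipeline, e.g.
# quantize_sketch(model, "fp8", forward_loop=lambda m: [m(b) for b in calib_batches])
```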


hf_quant_algo = hf_quant_config.pop("quant_algo", None)
if hf_quant_algo != "FP8" and hf_quant_algo != "NVFP4":
    raise RuntimeError("Only FP8 or NVFP4 quantization is supported")
Collaborator


How would it be different for MXFP4?

Collaborator Author


I looked at the quantization configs in ModelOpt:

NVFP4_DEFAULT_CFG: NVFP4 uses E4M3 scales and a block size of 16.

MXFP4_DEFAULT_CFG: MXFP4 uses E8M0 scales and a block size of 32.
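
To make the difference concrete, a toy illustration of the block scaling (conceptual only, not how ModelOpt computes scales internally): NVFP4 produces one E4M3 scale per 16 values, while MXFP4 produces one power-of-two E8M0 scale per 32 values, so an MXFP4 path would need a different block size and scale encoding.

```python
import torch

def per_block_amax(w: torch.Tensor, block_size: int) -> torch.Tensor:
    # One scale per contiguous block along the last dimension.
    return w.reshape(-1, block_size).abs().amax(dim=-1)

w = torch.randn(1, 128)
nvfp4_scales = per_block_amax(w, 16)   # 8 scales, stored in FP8 (E4M3)
mxfp4_scales = per_block_amax(w, 32)   # 4 scales, stored in E8M0
# E8M0 scales are pure powers of two, so MXFP4 rounds each scale up:
mxfp4_scales = torch.exp2(torch.ceil(torch.log2(mxfp4_scales)))
```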

@github-actions github-actions bot added component: api [Python] Issues re: Python API component: dynamo Issues relating to the `torch.compile` or `torch._dynamo.export` paths labels Sep 4, 2025